- Resource allocation problems in many computer systems can be formulated as mathematical optimization problems. However, finding exact solutions to these problems using off-the-shelf solvers is often intractable for large problem sizes with tight SLAs, leading system designers to rely on cheap, heuristic algorithms. We observe, however, that many allocation problems are granular: they consist of a large number of clients and resources, each client requests a small fraction of the total number of resources, and clients can interchangeably use different resources. For these problems, we propose an alternative approach that reuses the original optimization problem formulation and leads to better allocations than domain-specific heuristics. Our technique, Partitioned Optimization Problems (POP), randomly splits the problem into smaller problems (with a subset of the clients and resources in the system) and coalesces the resulting sub-allocations into a global allocation for all clients. We provide theoretical and empirical evidence as to why random partitioning works well. In our experiments, POP achieves allocations within 1.5% of the optimal with orders-of-magnitude improvements in runtime compared to existing systems for cluster scheduling, traffic engineering, and load balancing.
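The split-solve-coalesce structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: POP hands each sub-problem to an off-the-shelf optimization solver, whereas here a hypothetical greedy `solve_subproblem` stands in so the example is self-contained; `pop_allocate` and all names are assumptions for illustration.

```python
import random

def solve_subproblem(clients, servers):
    """Greedy stand-in for a per-partition solver: assign each
    client's demand to the currently least-loaded server."""
    loads = {s: 0.0 for s in servers}
    allocation = {}
    for client, demand in clients:
        target = min(loads, key=loads.get)
        loads[target] += demand
        allocation[client] = target
    return allocation

def pop_allocate(clients, servers, k):
    """POP sketch: randomly partition clients and servers into k
    sub-problems, solve each independently, and coalesce the
    sub-allocations into one global allocation."""
    clients, servers = list(clients), list(servers)
    random.shuffle(clients)
    random.shuffle(servers)
    allocation = {}
    for i in range(k):
        sub_clients = clients[i::k]   # every k-th client goes to partition i
        sub_servers = servers[i::k]   # every k-th server goes to partition i
        allocation.update(solve_subproblem(sub_clients, sub_servers))
    return allocation
```

Because each partition sees roughly 1/k of the clients and resources, per-partition solve time shrinks dramatically, and the granularity property (small, interchangeable requests) is what keeps the coalesced allocation close to the global optimum.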
- Osteoarthritis is the third most rapidly growing health condition associated with disability, after dementia and diabetes1. By 2050, the total number of patients with osteoarthritis is estimated to reach 1 billion worldwide2. As no disease-modifying treatments exist for osteoarthritis, a better understanding of disease aetiopathology is urgently needed. Here we perform a genome-wide association study meta-analysis across up to 489,975 cases and 1,472,094 controls, establishing 962 independent associations, 513 of which have not been previously reported. Using single-cell multiomics data, we identify signal enrichment in embryonic skeletal development pathways. We integrate orthogonal lines of evidence, including transcriptome, proteome and epigenome profiles of primary joint tissues, and implicate 700 effector genes. Within these, we find rare coding-variant burden associations with effect sizes that are consistently higher than common frequency variant associations. We highlight eight biological processes in which we find convergent involvement of multiple effector genes, including the circadian clock, glial-cell-related processes and pathways with an established role in osteoarthritis (TGFβ, FGF, WNT, BMP and retinoic acid signalling, and extracellular matrix organization). We find that 10% of the effector genes express a protein that is the target of approved drugs, offering repurposing opportunities, which can accelerate translation. (Free, publicly accessible full text available May 29, 2026.)
- Systems for ML inference are widely deployed today, but they typically optimize ML inference workloads using techniques designed for conventional data serving workloads and miss critical opportunities to leverage the statistical nature of ML. In this paper, we present WILLUMP, an optimizer for ML inference that introduces two statistically-motivated optimizations targeting ML applications whose performance bottleneck is feature computation. First, WILLUMP automatically cascades feature computation for classification queries: WILLUMP classifies most data inputs using only high-value, low-cost features selected through empirical observations of ML model performance, improving query performance by up to 5× without statistically significant accuracy loss. Second, WILLUMP accurately approximates ML top-K queries, discarding low-scoring inputs with an automatically constructed approximate model and then ranking the remainder with a more powerful model, improving query performance by up to 10× with minimal accuracy loss. WILLUMP automatically tunes these optimizations' parameters to maximize query performance while meeting an accuracy target. Moreover, WILLUMP complements these statistical optimizations with compiler optimizations to automatically generate fast inference code for ML applications. We show that WILLUMP improves the end-to-end performance of real-world ML inference pipelines curated from major data science competitions by up to 16× without statistically significant loss of accuracy.
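The cascading optimization above can be illustrated with a small control-flow sketch. This is not Willump's API (Willump selects the low-cost feature subset and tunes the confidence threshold automatically); `cascade_predict`, `cheap_model`, and `full_model` are hypothetical names, and the threshold is fixed here only for illustration.

```python
def cascade_predict(x, cheap_model, full_model, threshold=0.9):
    """Cascade sketch: answer most queries with a model over cheap
    features, and fall back to the expensive model only when the
    cheap model is not confident enough."""
    p = cheap_model(x)          # probability of the positive class
    if p >= threshold:
        return 1                # confident positive: skip expensive features
    if p <= 1.0 - threshold:
        return 0                # confident negative: skip expensive features
    return full_model(x)        # uncertain: compute all features
```

Because most inputs are easy to classify, the expensive feature computation runs only on the small uncertain slice, which is where the reported speedups of up to 5× without significant accuracy loss come from.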
 An official website of the United States government